Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smart phone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications.
In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice, you'd train this classifier, then export it for use in your application. We'll be using this dataset from Oxford, which contains 102 flower categories; you can see a few examples below.

The project is broken down into multiple steps:
We'll lead you through each part which you'll implement in Python.
When you've completed this project, you'll have an application that can be trained on any set of labeled images. Here your network will be learning about flowers and end up as a command line application. But, what you do with your new skills depends on your imagination and effort in building a dataset. For example, imagine an app where you take a picture of a car, it tells you what the make and model is, then looks up information about it. Go build your own dataset and make something new.
# TODO: Make all necessary imports.
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import os
import time
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_hub as hub
tfds.disable_progress_bar()
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import json
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
print('Using:')
print('\t\u2022 TensorFlow version:', tf.__version__)
print('\t\u2022 tf.keras version:', tf.keras.__version__)
print('\t\u2022 Running on GPU' if tf.test.is_gpu_available() else '\t\u2022 GPU device not found. Running on CPU')
Using:
    • TensorFlow version: 2.4.0
    • tf.keras version: 2.4.0
    • GPU device not found. Running on CPU
Here you'll use tensorflow_datasets to load the Oxford Flowers 102 dataset. This dataset has 3 splits: 'train', 'test', and 'validation'. You'll also need to make sure the training data is normalized and resized to 224x224 pixels as required by the pre-trained networks.
The validation and testing sets are used to measure the model's performance on data it hasn't seen yet, but you'll still need to normalize and resize the images to the appropriate size.
# TODO: Load the dataset with TensorFlow Datasets.
dataset, dataset_info = tfds.load('oxford_flowers102', as_supervised = True, with_info = True)
# Check that dataset is a dictionary
print('dataset has type:', type(dataset))
# Print the keys of the dataset dictionary
print('\nThe keys of dataset are:', list(dataset.keys()))
# Display the dataset_info
dataset_info
dataset has type: <class 'dict'>
The keys of dataset are: ['train', 'test', 'validation']
tfds.core.DatasetInfo(
name='oxford_flowers102',
full_name='oxford_flowers102/2.1.1',
description="""
The Oxford Flowers 102 dataset consists of 102 flower categories commonly occurring
in the United Kingdom. Each class consists of between 40 and 258 images. The images have
large scale, pose and light variations. In addition, there are categories that have large
variations within the category and several very similar categories.
The dataset is divided into a training set, a validation set and a test set.
The training set and validation set each consist of 10 images per class (totalling 1020 images each).
The test set consists of the remaining 6149 images (minimum 20 per class).
""",
homepage='https://www.robots.ox.ac.uk/~vgg/data/flowers/102/',
data_path='C:\\Users\\bishebanm\\tensorflow_datasets\\oxford_flowers102\\2.1.1',
download_size=328.90 MiB,
dataset_size=331.34 MiB,
features=FeaturesDict({
'file_name': Text(shape=(), dtype=tf.string),
'image': Image(shape=(None, None, 3), dtype=tf.uint8),
'label': ClassLabel(shape=(), dtype=tf.int64, num_classes=102),
}),
supervised_keys=('image', 'label'),
splits={
'test': <SplitInfo num_examples=6149, num_shards=2>,
'train': <SplitInfo num_examples=1020, num_shards=1>,
'validation': <SplitInfo num_examples=1020, num_shards=1>,
},
citation="""@InProceedings{Nilsback08,
author = "Nilsback, M-E. and Zisserman, A.",
title = "Automated Flower Classification over a Large Number of Classes",
booktitle = "Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing",
year = "2008",
month = "Dec"
}""",
)
# TODO: Create a training set, a validation set and a test set.
training_set, validation_set, test_set = dataset['train'], dataset['validation'], dataset['test']
dataset_info.features['image']
dataset_info.features['label']
dataset_info.splits['train']
dataset_info.splits['validation']
dataset_info.splits['test']
# TODO: Get the number of examples in each set from the dataset info.
num_training_examples = dataset_info.splits['train'].num_examples
num_validation_examples = dataset_info.splits['validation'].num_examples
num_test_examples = dataset_info.splits['test'].num_examples
print('There are {:,} images in the training set'.format(num_training_examples))
print('\nThere are {:,} images in the validation set'.format(num_validation_examples))
print('\nThere are {:,} images in the test set'.format(num_test_examples))
# TODO: Get the number of classes in the dataset from the dataset info.
num_classes = dataset_info.features['label'].num_classes
print('\nThere are {:,} classes in our dataset'.format(num_classes))
There are 1,020 images in the training set
There are 1,020 images in the validation set
There are 6,149 images in the test set
There are 102 classes in our dataset
# TODO: Print the shape and corresponding label of 3 images in the training set.
shape_images = dataset_info.features['image'].shape
print('The images in our dataset have shape:', shape_images)
for image, label in training_set.take(3):
    print('\nOne image in the training set has:\n\u2022 dtype:', image.dtype, '\n\u2022 shape:', image.shape)
    print('One image in the training set has label:', label.numpy())
The images in our dataset have shape: (None, None, 3)
One image in the training set has:
• dtype: <dtype: 'uint8'>
• shape: (500, 667, 3)
One image in the training set has label: 72
One image in the training set has:
• dtype: <dtype: 'uint8'>
• shape: (500, 666, 3)
One image in the training set has label: 84
One image in the training set has:
• dtype: <dtype: 'uint8'>
• shape: (670, 500, 3)
One image in the training set has label: 70
# TODO: Plot 1 image from the training set. Set the title
# of the plot to the corresponding image label.
for image, label in training_set.take(1):
    image = image.numpy().squeeze()
    label = label.numpy()
    # TODO: use subplot
    plt.imshow(image, cmap=plt.cm.binary)
    plt.colorbar()
    plt.title(str(label))
    plt.show()
You'll also need to load in a mapping from label to category name. You can find this in the file label_map.json. It's a JSON object which you can read in with the json module. This will give you a dictionary mapping the integer coded labels to the actual names of the flowers.
with open('label_map.json', 'r') as f:
    class_names = json.load(f)
print(class_names)
{'21': 'fire lily', '3': 'canterbury bells', '45': 'bolero deep blue', '1': 'pink primrose', '34': 'mexican aster', '27': 'prince of wales feathers', '7': 'moon orchid', '16': 'globe-flower', '25': 'grape hyacinth', '26': 'corn poppy', '79': 'toad lily', '39': 'siam tulip', '24': 'red ginger', '67': 'spring crocus', '35': 'alpine sea holly', '32': 'garden phlox', '10': 'globe thistle', '6': 'tiger lily', '93': 'ball moss', '33': 'love in the mist', '9': 'monkshood', '102': 'blackberry lily', '14': 'spear thistle', '19': 'balloon flower', '100': 'blanket flower', '13': 'king protea', '49': 'oxeye daisy', '15': 'yellow iris', '61': 'cautleya spicata', '31': 'carnation', '64': 'silverbush', '68': 'bearded iris', '63': 'black-eyed susan', '69': 'windflower', '62': 'japanese anemone', '20': 'giant white arum lily', '38': 'great masterwort', '4': 'sweet pea', '86': 'tree mallow', '101': 'trumpet creeper', '42': 'daffodil', '22': 'pincushion flower', '2': 'hard-leaved pocket orchid', '54': 'sunflower', '66': 'osteospermum', '70': 'tree poppy', '85': 'desert-rose', '99': 'bromelia', '87': 'magnolia', '5': 'english marigold', '92': 'bee balm', '28': 'stemless gentian', '97': 'mallow', '57': 'gaura', '40': 'lenten rose', '47': 'marigold', '59': 'orange dahlia', '48': 'buttercup', '55': 'pelargonium', '36': 'ruby-lipped cattleya', '91': 'hippeastrum', '29': 'artichoke', '71': 'gazania', '90': 'canna lily', '18': 'peruvian lily', '98': 'mexican petunia', '8': 'bird of paradise', '30': 'sweet william', '17': 'purple coneflower', '52': 'wild pansy', '84': 'columbine', '12': "colt's foot", '11': 'snapdragon', '96': 'camellia', '23': 'fritillary', '50': 'common dandelion', '44': 'poinsettia', '53': 'primula', '72': 'azalea', '65': 'californian poppy', '80': 'anthurium', '76': 'morning glory', '37': 'cape flower', '56': 'bishop of llandaff', '60': 'pink-yellow dahlia', '82': 'clematis', '58': 'geranium', '75': 'thorn apple', '41': 'barbeton daisy', '95': 'bougainvillea', '43': 'sword lily', '83': 'hibiscus', '78': 'lotus lotus', '88': 'cyclamen', '94': 'foxglove', '81': 'frangipani', '74': 'rose', '89': 'watercress', '73': 'water lily', '46': 'wallflower', '77': 'passion flower', '51': 'petunia'}
There are two ways to find the flower name: either using class_names[str(label+1)] or using get_label_name(label).
From the dictionary above (loaded from the external JSON file), we can see that key '73' corresponds to water lily, while the figure below shows a water lily whose tfds label is 72. I have checked this naming convention for some other images too and noticed that 1 must be added to the label number, i.e. class_names[str(label+1)].
However, if we use dataset_info.features['label'].int2str, we get the correct name directly, with no offset.
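To make the offset explicit, here is a small sketch (the helper name is my own choice) comparing the two lookups; it relies on the class_names and dataset_info objects defined above:
# Hypothetical helper: label_map.json keys are 1-based strings, tfds labels are 0-based integers
def lookup_name(label):
    from_json = class_names[str(label + 1)]                       # shift by 1 for label_map.json
    from_tfds = dataset_info.features['label'].int2str(label)     # no shift needed
    return from_json, from_tfds

print(lookup_name(72))   # both entries should read 'water lily' for the image plotted below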
# TODO: Plot 1 image from the training set. Set the title
# of the plot to the corresponding class name.
for image, label in training_set.take(1):
    image = image.numpy().squeeze()
    label = label.numpy()
    plt.imshow(image, cmap=plt.cm.binary)
    plt.title(class_names[str(label + 1)])
    plt.colorbar()
    plt.show()

print('The label of this image is:', label)
get_label_name = dataset_info.features['label'].int2str
print(get_label_name(label))
The label of this image is: 72
water lily
# TODO: Plot 1 image from the training set. Set the title
# of the plot to the corresponding class name.
for image, label in training_set.take(2):
    image = image.numpy().squeeze()
    label = label.numpy()
    plt.imshow(image, cmap=plt.cm.binary)
    plt.title(class_names[str(label + 1)])
    plt.colorbar()
    plt.show()

print('The label of this image is:', label)
get_label_name = dataset_info.features['label'].int2str
print(get_label_name(label))
The label of this image is: 84
desert-rose
# TODO: Create a pipeline for each set.
batch_size = 32
image_size = 224
def format_image(image, label):
    image = tf.cast(image, tf.float32)
    image = tf.image.resize(image, (image_size, image_size))
    image /= 255
    return image, label
training_batches1 = training_set.shuffle(num_training_examples//4).map(format_image)
validation_batches1 = validation_set.map(format_image)
testing_batches1 = test_set.map(format_image)
training_batches = training_batches1.batch(batch_size).prefetch(1)
validation_batches = validation_batches1.batch(batch_size).prefetch(1)
testing_batches = testing_batches1.batch(batch_size).prefetch(1)
The ImageDataGenerator class has three methods for creating data generators:
.flow() is for when your data is already in memory as NumPy arrays.
.flow_from_directory() is for when your data lives on disk with a separate folder of images for each class.
.flow_from_dataframe() is for when your data lives on disk in a single folder and a dataframe maps file names to labels.
Since the images here are already available as arrays, flow() is the right choice. flow() requires separate X and y arrays, so the following function extracts them from a TensorFlow dataset.
Note: I have not used the augmented data to train the model!
def get_X_y(dataset):
    """Return X, y NumPy arrays from a TensorFlow dataset."""
    images = []
    labels = []
    for x, y in dataset:
        images.append(x.numpy())
        labels.append(y.numpy())
    return np.asarray(images), np.asarray(labels)
X_train, y_train = get_X_y(training_batches1)
image_gen_train = ImageDataGenerator(rotation_range=40,
                                     zoom_range=0.2,
                                     horizontal_flip=True).flow(X_train, y_train,
                                                                batch_size=batch_size,
                                                                shuffle=False)
# Note: the Keras iterator returned by .flow() is not a tf.data dataset, so it has no .prefetch() method.
Validation and test data generators without augmentation and shuffling:
X_val, y_val = get_X_y(validation_batches1)
image_gen_val = ImageDataGenerator().flow(X_val, y_val,
                                          batch_size=batch_size,
                                          shuffle=False)
X_test, y_test = get_X_y(testing_batches1)
image_gen_test = ImageDataGenerator().flow(X_test, y_test,
                                           batch_size=batch_size,
                                           shuffle=False)
# This function will plot images in the form of a grid with 1 row and 5 columns
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip(images_arr, axes):
ax.imshow(img)
plt.tight_layout()
plt.show()
# Show the first training image with five different random augmentations
augmented_images = [image_gen_train[0][0][0] for i in range(5)]
plotImages(augmented_images)
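Since the augmented generator above is not actually used for training, one alternative worth noting is to apply random augmentation directly inside the tf.data pipeline with tf.image ops. The following is only a sketch under that assumption and is not used elsewhere in this notebook:
# Sketch only: random augmentation inside the tf.data pipeline (not used for training below)
def augment_image(image, label):
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.clip_by_value(image, 0.0, 1.0)   # keep pixel values in [0, 1] after the brightness shift
    return image, label

augmented_training_batches = (training_set
                              .shuffle(num_training_examples // 4)
                              .map(format_image)      # resize to 224x224 and scale to [0, 1]
                              .map(augment_image)
                              .batch(batch_size)
                              .prefetch(1))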
Now, it's time to build and train the classifier.
The pre-trained MobileNet v2 model from TensorFlow Hub is used as a frozen feature extractor, and a new feed-forward classifier is built and trained on top of those image features.
# TODO: Build and train your network.
URL= "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL, input_shape=(image_size, image_size,3))
feature_extractor.trainable = False
model = tf.keras.Sequential([
feature_extractor,
tf.keras.layers.Dense(4*num_classes, activation = 'relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(3*num_classes, activation = 'relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(2*num_classes, activation = 'relu'),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(num_classes, activation = 'softmax')
])
model.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= keras_layer (KerasLayer) (None, 1280) 2257984 _________________________________________________________________ dense (Dense) (None, 408) 522648 _________________________________________________________________ dropout (Dropout) (None, 408) 0 _________________________________________________________________ dense_1 (Dense) (None, 306) 125154 _________________________________________________________________ dropout_1 (Dropout) (None, 306) 0 _________________________________________________________________ dense_2 (Dense) (None, 204) 62628 _________________________________________________________________ dropout_2 (Dropout) (None, 204) 0 _________________________________________________________________ dense_3 (Dense) (None, 102) 20910 ================================================================= Total params: 2,989,324 Trainable params: 731,340 Non-trainable params: 2,257,984 _________________________________________________________________
model_CNN = tf.keras.Sequential([
tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(4*num_classes, activation = 'relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(3*num_classes, activation = 'relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(2*num_classes, activation = 'relu'),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(num_classes, activation = 'softmax')
])
for image_batch, label_batch in testing_batches.take(1):
    ps = model.predict(image_batch)
    images = image_batch.numpy().squeeze()
    labels = label_batch.numpy()

plt.figure(figsize=(10, 15))
for n in range(30):
    plt.subplot(6, 5, n + 1)
    plt.imshow(images[n], cmap=plt.cm.binary)
    # Green title when the prediction matches the true label, red otherwise
    color = 'green' if np.argmax(ps[n]) == labels[n] else 'red'
    plt.title(get_label_name(np.argmax(ps[n])), color=color)
    plt.axis('off')
Plot one test image along with its predicted class probabilities:
for image_batch, label_batch in testing_batches.take(1):
    ps = model.predict(image_batch)
    first_image = image_batch.numpy().squeeze()[0]
    first_label = label_batch.numpy()[0]

fig, (ax1, ax2) = plt.subplots(figsize=(6, 20), ncols=2)
ax1.imshow(first_image, cmap=plt.cm.binary)
ax1.axis('off')
color = 'green' if np.argmax(ps[0]) == first_label else 'red'
ax1.set_title(class_names[str(np.argmax(ps[0]) + 1)], color=color)
ax2.barh(np.arange(num_classes), ps[0])
ax2.set_aspect(0.1)
ax2.set_yticks(np.arange(num_classes))
# Use the flower names in label order (not the raw dict, which would label the bars with its keys)
ax2.set_yticklabels([class_names[str(i + 1)] for i in range(num_classes)], size='small')
ax2.set_title('Class Probability')
ax2.set_xlim(0, 1.1)
plt.tight_layout()
print('\ncorrect is :', get_label_name(first_label))
print('but estimated is :', get_label_name(np.argmax(ps[0])))
correct is : barbeton daisy
but estimated is : giant white arum lily
EPOCHS = 20
# Stop training when there is no improvement in the validation loss for 10 consecutive epochs
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
# Save the Model with the lowest validation loss
save_best = tf.keras.callbacks.ModelCheckpoint('./best_model.h5',
monitor='val_loss',
save_best_only=True)
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
#training_batches, validation_batches
history = model.fit(training_batches,
epochs = EPOCHS,
validation_data=validation_batches,
callbacks=[early_stopping, save_best])
Epoch 1/20  32/32 [==============================] - 38s 1s/step - loss: 4.7048 - accuracy: 0.0162 - val_loss: 4.3202 - val_accuracy: 0.1294
Epoch 2/20  32/32 [==============================] - 33s 1s/step - loss: 4.0947 - accuracy: 0.1045 - val_loss: 3.1658 - val_accuracy: 0.2843
Epoch 3/20  32/32 [==============================] - 33s 1s/step - loss: 2.8904 - accuracy: 0.2987 - val_loss: 2.1757 - val_accuracy: 0.4490
Epoch 4/20  32/32 [==============================] - 34s 1s/step - loss: 1.9341 - accuracy: 0.4673 - val_loss: 1.5818 - val_accuracy: 0.5971
Epoch 5/20  32/32 [==============================] - 34s 1s/step - loss: 1.2691 - accuracy: 0.6313 - val_loss: 1.4205 - val_accuracy: 0.6284
Epoch 6/20  32/32 [==============================] - 34s 1s/step - loss: 1.0125 - accuracy: 0.7074 - val_loss: 1.2765 - val_accuracy: 0.6510
Epoch 7/20  32/32 [==============================] - 33s 1s/step - loss: 0.6728 - accuracy: 0.7833 - val_loss: 1.1843 - val_accuracy: 0.6980
Epoch 8/20  32/32 [==============================] - 34s 1s/step - loss: 0.4900 - accuracy: 0.8706 - val_loss: 1.1461 - val_accuracy: 0.6902
Epoch 9/20  32/32 [==============================] - 33s 1s/step - loss: 0.3292 - accuracy: 0.9076 - val_loss: 1.1479 - val_accuracy: 0.7078
Epoch 10/20  32/32 [==============================] - 33s 1s/step - loss: 0.3216 - accuracy: 0.9026 - val_loss: 1.0761 - val_accuracy: 0.7422
Epoch 11/20  32/32 [==============================] - 35s 1s/step - loss: 0.2136 - accuracy: 0.9473 - val_loss: 1.1317 - val_accuracy: 0.7167
Epoch 12/20  32/32 [==============================] - 36s 1s/step - loss: 0.1888 - accuracy: 0.9491 - val_loss: 1.0619 - val_accuracy: 0.7402
Epoch 13/20  32/32 [==============================] - 34s 1s/step - loss: 0.1685 - accuracy: 0.9586 - val_loss: 1.0555 - val_accuracy: 0.7284
Epoch 14/20  32/32 [==============================] - 34s 1s/step - loss: 0.1853 - accuracy: 0.9489 - val_loss: 0.9945 - val_accuracy: 0.7441
Epoch 15/20  32/32 [==============================] - 34s 1s/step - loss: 0.1188 - accuracy: 0.9766 - val_loss: 1.0555 - val_accuracy: 0.7382
Epoch 16/20  32/32 [==============================] - 35s 1s/step - loss: 0.0904 - accuracy: 0.9821 - val_loss: 1.1418 - val_accuracy: 0.7284
Epoch 17/20  32/32 [==============================] - 34s 1s/step - loss: 0.1111 - accuracy: 0.9711 - val_loss: 1.0498 - val_accuracy: 0.7461
Epoch 18/20  32/32 [==============================] - 33s 1s/step - loss: 0.0585 - accuracy: 0.9895 - val_loss: 1.0969 - val_accuracy: 0.7333
Epoch 19/20  32/32 [==============================] - 33s 1s/step - loss: 0.0758 - accuracy: 0.9809 - val_loss: 1.0740 - val_accuracy: 0.7461
Epoch 20/20  32/32 [==============================] - 33s 1s/step - loss: 0.0928 - accuracy: 0.9681 - val_loss: 1.0775 - val_accuracy: 0.7480
#training_batches, validation_batches
model_CNN.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
history_CNN = model_CNN.fit(training_batches,
epochs = EPOCHS,
validation_data=validation_batches,
callbacks=[early_stopping, save_best])
Epoch 1/20  32/32 [==============================] - 93s 3s/step - loss: 16.4437 - accuracy: 0.0146 - val_loss: 4.6247 - val_accuracy: 0.0088
Epoch 2/20  32/32 [==============================] - 87s 3s/step - loss: 4.6269 - accuracy: 0.0066 - val_loss: 4.6154 - val_accuracy: 0.0147
Epoch 3/20  32/32 [==============================] - 87s 3s/step - loss: 4.5845 - accuracy: 0.0223 - val_loss: 4.5785 - val_accuracy: 0.0108
Epoch 4/20  32/32 [==============================] - 87s 3s/step - loss: 4.4516 - accuracy: 0.0557 - val_loss: 4.5622 - val_accuracy: 0.0225
Epoch 5/20  32/32 [==============================] - 86s 3s/step - loss: 3.7959 - accuracy: 0.2050 - val_loss: 4.5595 - val_accuracy: 0.0382
Epoch 6/20  32/32 [==============================] - 87s 3s/step - loss: 2.1107 - accuracy: 0.5364 - val_loss: 5.0415 - val_accuracy: 0.0510
Epoch 7/20  32/32 [==============================] - 89s 3s/step - loss: 0.8616 - accuracy: 0.8062 - val_loss: 6.3136 - val_accuracy: 0.0461
Epoch 8/20  32/32 [==============================] - 90s 3s/step - loss: 0.5022 - accuracy: 0.9068 - val_loss: 6.4110 - val_accuracy: 0.0382
Epoch 9/20  32/32 [==============================] - 85s 3s/step - loss: 0.2264 - accuracy: 0.9616 - val_loss: 6.6922 - val_accuracy: 0.0363
Epoch 10/20  32/32 [==============================] - 85s 3s/step - loss: 0.2471 - accuracy: 0.9584 - val_loss: 6.6919 - val_accuracy: 0.0314
Epoch 11/20  32/32 [==============================] - 87s 3s/step - loss: 0.2187 - accuracy: 0.9616 - val_loss: 6.4369 - val_accuracy: 0.0510
Epoch 12/20  32/32 [==============================] - 88s 3s/step - loss: 0.2026 - accuracy: 0.9489 - val_loss: 6.2198 - val_accuracy: 0.0618
Epoch 13/20  32/32 [==============================] - 89s 3s/step - loss: 0.0629 - accuracy: 0.9859 - val_loss: 7.1560 - val_accuracy: 0.0402
Epoch 14/20  32/32 [==============================] - 87s 3s/step - loss: 0.0908 - accuracy: 0.9820 - val_loss: 6.5066 - val_accuracy: 0.0559
Epoch 15/20  32/32 [==============================] - 85s 3s/step - loss: 0.1019 - accuracy: 0.9720 - val_loss: 6.7502 - val_accuracy: 0.0529
print('Is there a GPU Available:', tf.test.is_gpu_available())
Is there a GPU Available: False
See how many are predicted correctly:
for image_batch, label_batch in testing_batches.take(1):
    ps = model.predict(image_batch)
    images = image_batch.numpy().squeeze()
    labels = label_batch.numpy()

plt.figure(figsize=(10, 15))
for n in range(30):
    plt.subplot(6, 5, n + 1)
    plt.imshow(images[n], cmap=plt.cm.binary)
    color = 'green' if np.argmax(ps[n]) == labels[n] else 'red'
    plt.title(class_names[str(np.argmax(ps[n]) + 1)], color=color)
    plt.axis('off')
# TODO: Plot the loss and accuracy values achieved during training for the training and validation set.
# Check that history.history is a dictionary
print('history.history has type:', type(history.history))
# Print the keys of the history.history dictionary
print('\nThe keys of history.history are:', list(history.history.keys()))
history.history has type: <class 'dict'>
The keys of history.history are: ['loss', 'accuracy', 'val_loss', 'val_accuracy']
training_accuracy = history.history['accuracy']
validation_accuracy = history.history['val_accuracy']
training_loss = history.history['loss']
validation_loss = history.history['val_loss']
epochs_range = range(EPOCHS)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, training_accuracy, label='Training Accuracy')
plt.plot(epochs_range, validation_accuracy, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, training_loss, label='Training Loss')
plt.plot(epochs_range, validation_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
It's good practice to test your trained network on test data, images the network has never seen either in training or validation. This will give you a good estimate for the model's performance on completely new images. You should be able to reach around 70% accuracy on the test set if the model has been trained well.
To test the network, I use model.evaluate rather than model.predict, because:
.predict() generates output predictions for the input you pass it.
.evaluate() computes the loss on the input you pass it, along with any other metrics requested via the metrics parameter when the model was compiled.
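As a small illustration of the difference (using the variables already defined above):
# Sketch: evaluate() aggregates the compiled metrics over the whole set,
# while predict() returns raw per-class probabilities, one row per image.
loss, accuracy = model.evaluate(testing_batches)      # two scalars
for image_batch, _ in testing_batches.take(1):
    ps = model.predict(image_batch)
    print(ps.shape)                                   # (batch_size, num_classes) = (32, 102)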
# TODO: Print the loss and accuracy values achieved on the entire test set.
loss, accuracy = model.evaluate(testing_batches)
print('\nLoss on the TEST Set: {:,.3f}'.format(loss))
print('Accuracy on the TEST Set: {:.3%}'.format(accuracy))
193/193 [==============================] - 100s 518ms/step - loss: 1.2670 - accuracy: 0.7104
Loss on the TEST Set: 1.267
Accuracy on the TEST Set: 71.036%
# TODO: Print the loss and accuracy values achieved on the entire test set.
loss_CNN, accuracy_CNN = model_CNN.evaluate(testing_batches)
print('\nLoss on the TEST Set using CNN model: {:,.3f}'.format(loss_CNN))
print('Accuracy on the TEST Set using CNN model: {:.3%}'.format(accuracy_CNN))
193/193 [==============================] - 86s 445ms/step - loss: 7.0566 - accuracy: 0.0350
Loss on the TEST Set using CNN model: 7.057
Accuracy on the TEST Set using CNN model: 3.497%
Now that your network is trained, save the model so you can load it later for making inference. In the cell below save your model as a Keras model (i.e. save it as an HDF5 file).
The model can be saved as below; however, the best model has already been saved above, since I used callbacks=[save_best].
# TODO: Save your trained model as a Keras model.
t = time.time()
saved_keras_model_filepath = './{}.h5'.format(int(t))
model.save(saved_keras_model_filepath)
Load the Keras model you saved above.
Note that best_model.h5 was written during training by the ModelCheckpoint callback (callbacks=[save_best]) and holds the weights from the epoch with the lowest validation loss, whereas the timestamp-named file saved above holds the weights from the final epoch, so the two are not necessarily identical (the prediction comparison below confirms they differ).
To reload a model that contains a hub.KerasLayer (i.e. one built with transfer learning from TensorFlow Hub), custom_objects={'KerasLayer': hub.KerasLayer} must be passed to tf.keras.models.load_model.
#saved_keras_model_filepath = '1610045803.h5'
saved_keras_model_filepath = 'best_model.h5'
# TODO: Load the Keras model
reloaded_keras_model = tf.keras.models.load_model(saved_keras_model_filepath,custom_objects={'KerasLayer':hub.KerasLayer})
reloaded_keras_model.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= keras_layer (KerasLayer) (None, 1280) 2257984 _________________________________________________________________ dense (Dense) (None, 408) 522648 _________________________________________________________________ dropout (Dropout) (None, 408) 0 _________________________________________________________________ dense_1 (Dense) (None, 306) 125154 _________________________________________________________________ dropout_1 (Dropout) (None, 306) 0 _________________________________________________________________ dense_2 (Dense) (None, 204) 62628 _________________________________________________________________ dropout_2 (Dropout) (None, 204) 0 _________________________________________________________________ dense_3 (Dense) (None, 102) 20910 ================================================================= Total params: 2,989,324 Trainable params: 731,340 Non-trainable params: 2,257,984 _________________________________________________________________
Compare the current (final-epoch) model with the reloaded best checkpoint:
for image_batch, label_batch in testing_batches.take(1):
    prediction_1 = model.predict(image_batch)
    prediction_2 = reloaded_keras_model.predict(image_batch)
    difference = np.abs(prediction_1 - prediction_2)
    print(difference.max())
0.6641708
model = reloaded_keras_model
Now you'll write a function that uses your trained network for inference. Write a function called predict that takes an image, a model, and then returns the top $K$ most likely class labels along with the probabilities. The function call should look like:
probs, classes = predict(image_path, model, top_k)
If top_k=5 the output of the predict function should be something like this:
probs, classes = predict(image_path, model, 5)
print(probs)
print(classes)
> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]
> ['70', '3', '45', '62', '55']
Your predict function should use PIL to load the image from the given image_path. You can use the Image.open function to load the images. The Image.open() function returns an Image object. You can convert this Image object to a NumPy array by using the np.asarray() function.
The predict function will also need to handle pre-processing the input image such that it can be used by your model. We recommend you write a separate function called process_image that performs the pre-processing. You can then call the process_image function from the predict function.
The process_image function should take in an image (in the form of a NumPy array) and return an image in the form of a NumPy array with shape (224, 224, 3).
First, you should convert your image into a TensorFlow Tensor and then resize it to the appropriate size using tf.image.resize.
Second, the pixel values of the input images are typically encoded as integers in the range 0-255, but the model expects the pixel values to be floats in the range 0-1. Therefore, you'll also need to normalize the pixel values.
Finally, convert your image back to a NumPy array using the .numpy() method.
# TODO: Create the process_image function
def process_image(image, image_size):
    image = tf.convert_to_tensor(image, tf.float32)
    image = tf.image.resize(image, (image_size, image_size))
    image /= 255
    image = image.numpy()
    return image
To check your process_image function we have provided 4 images in the ./test_images/ folder:
The code below loads one of the above images using PIL and plots the original image alongside the image produced by your process_image function. If your process_image function works, the plotted image should be the correct size.
image_path1 = './test_images/cautleya_spicata.jpg'
image_path2 = './test_images/hard-leaved_pocket_orchid.jpg'
image_path3 = './test_images/orange_dahlia.jpg'
image_path4 = './test_images/wild_pansy.jpg'
im = Image.open(image_path1)
print(im.format)
print(im.size)
print(im.mode)
JPEG
(500, 750)
RGB
test_image = np.asarray(im)
print(type(test_image))
print(test_image.shape)
<class 'numpy.ndarray'>
(750, 500, 3)
processed_test_image = process_image(test_image, image_size)
print(type(processed_test_image))
print(processed_test_image.shape)
fig, (ax1, ax2) = plt.subplots(figsize=(10,10), ncols=2)
ax1.imshow(test_image)
ax1.set_title('Original Image')
ax2.imshow(processed_test_image)
ax2.set_title('Processed Image')
plt.tight_layout()
plt.show()
<class 'numpy.ndarray'>
(224, 224, 3)
Once you can get images in the correct format, it's time to write the predict function for making inference with your model.
Remember, the predict function should take an image, a model, and then returns the top $K$ most likely class labels along with the probabilities. The function call should look like:
probs, classes = predict(image_path, model, top_k)
If top_k=5 the output of the predict function should be something like this:
probs, classes = predict(image_path, model, 5)
print(probs)
print(classes)
> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]
> ['70', '3', '45', '62', '55']
Your predict function should use PIL to load the image from the given image_path. You can use the Image.open function to load the images. The Image.open() function returns an Image object. You can convert this Image object to a NumPy array by using the np.asarray() function.
Note: The image returned by the process_image function is a NumPy array with shape (224, 224, 3) but the model expects the input images to be of shape (1, 224, 224, 3). This extra dimension represents the batch size. We suggest you use the np.expand_dims() function to add the extra dimension.
# TODO: Create the predict function
def predict(image_path, model, top_k, image_size, class_names):
    im = Image.open(image_path)
    image = np.asarray(im)
    # Pre-process the image, add the batch dimension, and get the class probabilities
    ps = model.predict(np.expand_dims(process_image(image, image_size), axis=0))
    top_values, top_indices = tf.math.top_k(ps, top_k)
    # tfds labels are 0-based while label_map.json keys are 1-based, hence the +1
    top_classes = [class_names[str(value + 1)] for value in top_indices.numpy()[0]]
    return top_values.numpy()[0], top_classes
top_k=5
print('\nhard-leaved_pocket_orchid:')
image_path = './test_images/hard-leaved_pocket_orchid.jpg'
probs, classes = predict(image_path, model, top_k,image_size,class_names )
print(probs)
print(classes)
print('\ncautleya_spicata:')
image_path1 = './test_images/cautleya_spicata.jpg'
probs, classes = predict(image_path1, model, top_k,image_size,class_names )
print(probs)
print(classes)
print('\norange_dahlia:')
image_path3 = './test_images/orange_dahlia.jpg'
probs, classes = predict(image_path3, model, top_k,image_size,class_names )
print(probs)
print(classes)
print('\nwild_pansy:')
image_path4 = './test_images/wild_pansy.jpg'
probs, classes = predict(image_path4, model, top_k,image_size,class_names )
print(probs)
print(classes)
hard-leaved_pocket_orchid:
tf.Tensor([[ 1  2 39 51 19]], shape=(1, 5), dtype=int32)
[9.9993658e-01 2.0193118e-05 1.1002538e-05 9.2813907e-06 4.2069596e-06]
['hard-leaved pocket orchid', 'canterbury bells', 'lenten rose', 'wild pansy', 'giant white arum lily']

cautleya_spicata:
tf.Tensor([[60 23 45 10 14]], shape=(1, 5), dtype=int32)
[9.98736918e-01 1.05577812e-03 1.20000754e-04 3.29696632e-05 1.36191438e-05]
['cautleya spicata', 'red ginger', 'wallflower', 'snapdragon', 'yellow iris']

orange_dahlia:
tf.Tensor([[58 62 55  4 33]], shape=(1, 5), dtype=int32)
[0.7585914 0.0967942 0.0323455 0.02963103 0.02438846]
['orange dahlia', 'black-eyed susan', 'bishop of llandaff', 'english marigold', 'mexican aster']

wild_pansy:
tf.Tensor([[51 33 63 85 83]], shape=(1, 5), dtype=int32)
[9.9985445e-01 9.8562210e-05 2.1552070e-05 1.4471984e-05 3.0102246e-06]
['wild pansy', 'mexican aster', 'silverbush', 'tree mallow', 'columbine']
It's always good to check the predictions made by your model to make sure they are correct. To check your predictions we have provided 4 images in the ./test_images/ folder:
In the cell below use matplotlib to plot the input image alongside the probabilities for the top 5 classes predicted by your model. Plot the probabilities as a bar graph. The plot should look like this:

You can convert from the class integer labels to actual flower names using class_names.
First, plot the image together with the probabilities for all classes:
for image_batch, label_batch in testing_batches.take(1):
    ps = model.predict(image_batch)
    first_image = image_batch.numpy().squeeze()[0]
    first_label = label_batch.numpy()[0]

print('correct label is:', first_label)
print('correct name is:', class_names[str(first_label + 1)])
print('predicted name is:', class_names[str(np.argmax(ps[0]) + 1)])

fig, (ax1, ax2) = plt.subplots(figsize=(6, 20), ncols=2)
ax1.imshow(first_image, cmap=plt.cm.binary)
ax1.axis('off')
color = 'green' if np.argmax(ps[0]) == first_label else 'red'
ax1.set_title(class_names[str(np.argmax(ps[0]) + 1)], color=color)
ax2.barh(np.arange(num_classes), ps[0])
ax2.set_aspect(0.1)
ax2.set_yticks(np.arange(num_classes))
# Use the flower names in label order (not the raw dict, which would label the bars with its keys)
ax2.set_yticklabels([class_names[str(i + 1)] for i in range(num_classes)], size='small')
ax2.set_title('Class Probability')
ax2.set_xlim(0, 1.1)
plt.tight_layout()
correct label is: 40
correct name is: barbeton daisy
predicted name is: barbeton daisy
Find the top_k classes for the above example:
#ps = model.predict(np.expand_dims(process_image(first_image,image_size), axis=0))
print(ps.shape)
print('ps[0]')
print(ps[0].shape)
top_values, top_indices = tf.math.top_k(ps[0], top_k)
top_classes = [class_names[str(value+1)] for value in top_indices.numpy()]
print(top_indices)
print(top_values.numpy()[0])
print(top_classes)
(32, 102)
ps[0]
(102,)
tf.Tensor([40 65 33 53 85], shape=(5,), dtype=int32)
0.99368227
['barbeton daisy', 'osteospermum', 'mexican aster', 'sunflower', 'tree mallow']
# TODO: Plot the input image along with the top 5 classes
def Sanity_Check(image_path0, model, top_k, image_size, class_names):
    probs, classes = predict(image_path0, model, top_k, image_size, class_names)
    im = Image.open(image_path0)
    test_image = np.asarray(im)
    fig, (ax1, ax2) = plt.subplots(figsize=(6, 20), ncols=2)
    ax1.imshow(process_image(test_image, image_size), cmap=plt.cm.binary)
    ax1.axis('off')
    ax1.set_title(classes[0])                # most probable class as the image title
    ax2.barh(classes[::-1], probs[::-1])     # bar chart of the top_k class probabilities
    ax2.set_aspect(0.1)
    ax2.set_title('Class Probability')
    ax2.set_xlim(0, 1.1)
    plt.tight_layout()
    return None
top_k=5
image_path1 = './test_images/cautleya_spicata.jpg'
image_path2 = './test_images/hard-leaved_pocket_orchid.jpg'
image_path3 = './test_images/orange_dahlia.jpg'
image_path4 = './test_images/wild_pansy.jpg'
Sanity_Check(image_path1, model, top_k,image_size,class_names)
Sanity_Check(image_path2, model, top_k,image_size,class_names)
Sanity_Check(image_path3, model, top_k,image_size,class_names)
Sanity_Check(image_path4, model, top_k,image_size,class_names)
tf.Tensor([[60 23 45 10 14]], shape=(1, 5), dtype=int32)
tf.Tensor([[ 1  2 39 51 19]], shape=(1, 5), dtype=int32)
tf.Tensor([[58 62 55  4 33]], shape=(1, 5), dtype=int32)
tf.Tensor([[51 33 63 85 83]], shape=(1, 5), dtype=int32)
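Finally, the project brief above mentions turning the trained classifier into a command-line application. Below is a minimal sketch of what such a script might look like, assuming the model has been saved as an .h5 file and label_map.json is available; the file name, argument names, and defaults here are my own choices, not part of this notebook.
# predict_cli.py -- illustrative sketch only; file name, argument names, and defaults are assumptions
import argparse
import json

import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from PIL import Image


def process_image(image, image_size=224):
    """Resize to (image_size, image_size) and scale pixel values to [0, 1]."""
    image = tf.convert_to_tensor(image, tf.float32)
    image = tf.image.resize(image, (image_size, image_size))
    image /= 255
    return image.numpy()


def main():
    parser = argparse.ArgumentParser(description='Predict the flower species shown in an image.')
    parser.add_argument('image_path', help='path to the input image')
    parser.add_argument('model_path', help='path to a saved Keras (.h5) model')
    parser.add_argument('--top_k', type=int, default=5, help='return the top K classes')
    parser.add_argument('--category_names', default='label_map.json',
                        help='JSON file mapping labels to flower names')
    args = parser.parse_args()

    # hub.KerasLayer must be registered so the TF Hub feature extractor can be deserialized
    model = tf.keras.models.load_model(args.model_path,
                                       custom_objects={'KerasLayer': hub.KerasLayer})
    with open(args.category_names, 'r') as f:
        class_names = json.load(f)

    image = np.asarray(Image.open(args.image_path))
    ps = model.predict(np.expand_dims(process_image(image), axis=0))
    top_values, top_indices = tf.math.top_k(ps, args.top_k)
    for p, i in zip(top_values.numpy()[0], top_indices.numpy()[0]):
        # label_map.json keys are 1-based strings while the model's labels are 0-based
        print('{:.3f}  {}'.format(p, class_names[str(i + 1)]))


if __name__ == '__main__':
    main()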